Nonconvex-nonconcave minimax optimization has been the focus of intense research over the last decade due to its broad applications in machine learning and operations research. Unfortunately, most existing algorithms cannot be guaranteed to converge and may suffer from limit cycles. Their global convergence relies on certain conditions that are difficult to check, including but not limited to the global Polyak-\L{}ojasiewicz condition, the existence of a solution satisfying the weak Minty variational inequality, and the $\alpha$-interaction dominant condition. In this paper, we develop the first provably convergent algorithm, called the doubly smoothed gradient descent ascent method, which gets rid of limit cycles without requiring any additional conditions. We further show that the algorithm has an iteration complexity of $\mathcal{O}(\epsilon^{-4})$ for finding a game-stationary point, which matches the best iteration complexity of single-loop algorithms under nonconvex-concave settings. The algorithm presented here opens up a new path for designing provably convergent algorithms for nonconvex-nonconcave minimax optimization problems.
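As a minimal, hypothetical sketch of the doubly smoothed idea (the exact step sizes, anchor-update rule, and function names below are our assumptions, not the authors' specification): gradient descent ascent is run on a surrogate that adds a proximal term toward a slowly averaged anchor on *both* the primal and dual sides, which damps the rotation that produces limit cycles.

```python
import numpy as np

def ds_gda(grad_x, grad_y, x0, y0, steps=4000, c=0.05, r1=1.0, r2=1.0, beta=0.1, mu=0.1):
    """Sketch: GDA on a doubly smoothed surrogate
    F(x, y; z, v) = f(x, y) + (r1/2)||x - z||^2 - (r2/2)||y - v||^2,
    where the anchors z, v are exponential averages of the iterates."""
    x, y = np.asarray(x0, float).copy(), np.asarray(y0, float).copy()
    z, v = x.copy(), y.copy()
    for _ in range(steps):
        x = x - c * (grad_x(x, y) + r1 * (x - z))  # descent on smoothed primal
        y = y + c * (grad_y(x, y) - r2 * (y - v))  # ascent on smoothed dual
        z = z + beta * (x - z)                     # anchors drift slowly, damping cycles
        v = v + mu * (y - v)
    return x, y
```

On a toy strongly-convex-strongly-concave quadratic such as $f(x,y) = \tfrac12 x^2 - \tfrac12 y^2 + xy$, the iterates settle at the saddle point $(0, 0)$.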
The optimal design of experiments typically involves solving an NP-hard combinatorial optimization problem. In this paper, we aim to develop a globally convergent and practically efficient optimization algorithm. Specifically, we consider a setting where pre-treatment outcome data are available and the synthetic control estimator is invoked. The average treatment effect is estimated via the difference between the weighted average outcomes of the treated and control units, where the weights are learned from the observed data. Under this setting, we surprisingly observe that the optimal experimental design problem reduces to a so-called \textit{phase synchronization} problem. We solve this problem via a normalized variant of the generalized power method with spectral initialization. On the theoretical side, we establish the first global optimality guarantee for experimental design when pre-treatment data are sampled from certain data-generating processes. Empirically, we conduct extensive experiments to demonstrate the effectiveness of our method on data from the US Bureau of Labor Statistics and the Abadie-Diamond-Hainmueller California smoking data. In terms of root mean square error, our algorithm surpasses the random design by a large margin.
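For concreteness, here is a schematic of the generalized power method with spectral initialization on the standard phase synchronization problem $\max_{|z_i|=1} z^* C z$; the normalization variant used in the paper may differ from this plain version.

```python
import numpy as np

def gpm_phase_sync(C, n_iter=100):
    """Generalized power method for max_{|z_i|=1} z^* C z, C Hermitian.
    Spectral initialization: top eigenvector of C, entries pushed to the unit circle."""
    _, V = np.linalg.eigh(C)
    z = V[:, -1]
    z = z / np.maximum(np.abs(z), 1e-12)
    for _ in range(n_iter):
        y = C @ z
        z = y / np.maximum(np.abs(y), 1e-12)  # entrywise projection onto the circle
    return z
```

In the noiseless rank-one case $C = z z^*$, the method recovers the ground-truth phases exactly up to a global rotation.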
Learning mappings between infinite-dimensional function spaces has achieved empirical success in many disciplines of machine learning, including generative modeling, functional data analysis, causal inference, and multi-agent reinforcement learning. In this paper, we study the statistical limits of learning a Hilbert-Schmidt operator between two infinite-dimensional Sobolev reproducing kernel Hilbert spaces. We establish information-theoretic lower bounds in terms of the Sobolev Hilbert-Schmidt norm and show that a regularization that learns the spectral components below the bias contour and ignores the ones above the variance contour can achieve the optimal learning rate. Meanwhile, the spectral components between the bias and variance contours give us flexibility in designing computationally feasible machine learning algorithms. Based on this observation, we develop a multilevel kernel operator learning algorithm that is optimal when learning linear operators between infinite-dimensional function spaces.
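A crude finite-dimensional analogue of the bias/variance-contour idea (entirely our illustration, not the paper's multilevel algorithm): estimate the operator by least squares and keep only its leading spectral components, discarding the noise-dominated tail.

```python
import numpy as np

def spectral_truncated_operator(X, Y, k):
    """Hypothetical sketch: estimate a linear operator A from pairs Y ~ A X,
    keeping the top-k spectral components (loosely, those 'below the bias
    contour') and discarding the rest ('above the variance contour')."""
    A_ls = Y @ np.linalg.pinv(X)            # least-squares operator estimate
    U, s, Vt = np.linalg.svd(A_ls)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]   # rank-k spectral truncation
```

In the noiseless case with a rank-$k$ ground-truth operator and enough samples, the truncated estimate recovers the operator exactly.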
Among the factors that hinder the application of reinforcement learning (RL) to real-world problems, two are critical: limited data and the mismatch between the training and testing environments. In this paper, we attempt to address both issues simultaneously through the problem of distributionally robust offline RL. In particular, we learn an RL agent from historical data obtained from a source environment and optimize it to perform well in a perturbed environment. Moreover, we consider linear function approximation to apply the algorithm to large-scale problems. We prove that our algorithm achieves a suboptimality of $O(1/\sqrt{k})$ depending on the linear function dimension $d$, which appears to be the first result with a sample complexity guarantee in this setting. Diverse experiments are conducted to demonstrate our theoretical findings, showing the superiority of our algorithm over non-robust ones.
In this paper, we study the design and analysis of a class of efficient algorithms for computing the Gromov-Wasserstein (GW) distance tailored to large-scale graph learning tasks. Armed with the Luo-Tseng error bound condition~\citep{luo1992error}, the two proposed algorithms, called Bregman Alternating Projected Gradient (BAPG) and hybrid Bregman Proximal Gradient (hBPG), enjoy convergence guarantees. Based on task-specific properties, our analysis further provides novel theoretical insights to guide the selection of the best-fit method. As a result, we are able to provide comprehensive experiments to validate the effectiveness of our methods on a host of tasks, including graph alignment, graph partitioning, and shape matching. In terms of both wall-clock time and modeling performance, the proposed methods achieve state-of-the-art results.
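One plausible reading of a BAPG-style iteration, heavily hedged (the step rule, gradient form, and constants below are our assumptions, not the authors' exact algorithm): a KL (Bregman) mirror step on the GW objective, alternated with a projection onto one marginal constraint at a time.

```python
import numpy as np

def bapg_gw(C1, C2, mu, nu, rho=10.0, n_iter=300):
    """Sketch of a Bregman alternating projected gradient iteration for the
    squared-loss GW objective min_pi <-2 C1 pi C2, pi> over couplings."""
    pi = np.outer(mu, nu)
    for _ in range(n_iter):
        grad = -2.0 * C1 @ pi @ C2
        pi = pi * np.exp(-grad / rho)            # Bregman (KL) gradient step
        pi *= (mu / pi.sum(axis=1))[:, None]     # project rows onto marginal mu
        grad = -2.0 * C1 @ pi @ C2
        pi = pi * np.exp(-grad / rho)
        pi *= (nu / pi.sum(axis=0))[None, :]     # project columns onto marginal nu
    return pi
```

Because the projections alternate, only the most recently enforced marginal holds exactly at termination; the other is satisfied approximately, which is one reason the analysis is more delicate than for jointly projected methods.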
In this paper, we study the statistical limits of solving, via gradient descent, objective functions defined by Sobolev norms with noisy observations, for a general class of objective functions. Our class includes Sobolev training for kernel regression, the deep Ritz method (DRM), and physics-informed neural networks (PINNs) for solving elliptic partial differential equations (PDEs) as special cases. We consider a potentially infinite-dimensional parameterization of the model using a suitable reproducing kernel Hilbert space, and a continuous parameterization of problem hardness through the definition of kernel integral operators. We prove that gradient descent on this objective function can also achieve statistical optimality, and that the optimal number of passes over the data increases with the sample size. Based on our theory, we explain an implicit acceleration of using a Sobolev norm as the objective function for training, and infer that the optimal number of epochs for DRM becomes larger than that for PINNs as the data size and the hardness of the task increase, although both DRM and PINNs can achieve statistical optimality.
Using data from cardiovascular surgery patients with long and highly variable post-surgical lengths of stay (LOS), we develop a modeling framework to reduce recovery unit congestion. We estimate the LOS and its probability distribution using machine learning models, schedule procedures on a rolling basis using a variety of optimization models, and estimate performance with simulation. The machine learning models achieved only modest LOS prediction accuracy, despite access to a very rich set of patient characteristics. Compared to the current paper-based system used in the hospital, most optimization models failed to reduce congestion without increasing wait times for surgery. A conservative stochastic optimization with sufficient sampling to capture the long tail of the LOS distribution outperformed the current manual process and other stochastic and robust optimization approaches. These results highlight the perils of using oversimplified distributional models of LOS for scheduling procedures and the importance of using optimization methods well-suited to dealing with long-tailed behavior.
Modern neural networks can perform at least as well as humans in many tasks involving object classification and image generation. However, small perturbations that are imperceptible to humans can significantly degrade the performance of well-trained deep neural networks. We provide a distributionally robust optimization (DRO) framework that integrates human-based image quality assessment methods to design optimal attacks that are imperceptible to humans yet cause significant damage to deep neural networks. Through extensive experiments, we show that our attack algorithm generates better-quality (to humans) attacks than other state-of-the-art human-imperceptible attack methods. Furthermore, we demonstrate that DRO training with our optimally designed human-imperceptible attacks can improve group fairness in image classification. Finally, we provide an algorithmic implementation to significantly accelerate DRO training, which may be of independent interest.
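The attack design above is a DRO problem built on image-quality metrics; as a loose, hypothetical illustration of perception-aware perturbation budgets (not the paper's method), here is a sign-PGD sketch whose per-pixel budget shrinks where a sensitivity map says changes are more visible. All names (`perceptual_pgd`, `sens`) are ours.

```python
import numpy as np

def perceptual_pgd(x, grad_loss, sens, eps=0.1, step=0.02, n_iter=40):
    """Hypothetical sketch: sign-PGD with a per-pixel budget eps / sens, so
    highly perceptible pixels (large sens) receive smaller perturbations."""
    budget = eps / np.maximum(sens, 1e-6)
    delta = np.zeros_like(x)
    for _ in range(n_iter):
        g = grad_loss(x + delta)                           # loss gradient w.r.t. input
        delta = np.clip(delta + step * np.sign(g), -budget, budget)
    return x + delta
```

Against a linear surrogate loss the perturbation saturates its per-pixel budget, i.e. pixels flagged as twice as sensitive receive half the perturbation.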
In this paper, we study the statistical limits of deep learning techniques for solving elliptic partial differential equations (PDEs) from random samples, using the deep Ritz method (DRM) and physics-informed neural networks (PINNs). To simplify the problem, we focus on a prototype elliptic PDE: the Schr\"odinger equation with zero Dirichlet boundary condition, which has wide applications in quantum-mechanical systems. We establish upper and lower bounds for both methods, improving upon the concurrently derived upper bound for this problem via a fast-rate generalization bound. We discover that the current deep Ritz method is suboptimal and propose a modified version of it. We also prove that PINNs and the modified version of DRM can achieve minimax optimal bounds over Sobolev spaces. Empirically, following recent work showing that deep model accuracy improves with growing training sets according to a power law, we provide computational experiments to show a similar behavior of dimension-dependent power laws for deep PDE solvers.
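To make the two objectives concrete, here is a finite-difference sketch in 1D for the simplest case $-u'' = f$ with zero Dirichlet boundary (the Schr\"odinger case adds a potential term $V u$); the discretization and empirical averaging are our illustration, whereas the actual methods train neural networks on sampled points.

```python
import numpy as np

def drm_loss(u, f, x):
    """Deep Ritz objective: empirical variational energy of 0.5*|u'|^2 - f*u."""
    du = np.gradient(u, x)
    return np.mean(0.5 * du**2 - f * u)

def pinn_loss(u, f, x):
    """PINN objective: mean squared residual of the strong form -u'' = f."""
    du = np.gradient(u, x)
    d2u = np.gradient(du, x)
    return np.mean((-d2u - f)**2)
```

At the true solution both objectives are minimized, but they behave differently: the PINN residual vanishes, while the Ritz energy attains its (negative) minimum value, e.g. $-\pi^2/4$ for $u = \sin(\pi x)$, $f = \pi^2 \sin(\pi x)$ on $[0, 1]$.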
We propose a data-driven portfolio selection model that integrates side information, conditional estimation, and robustness using the framework of distributionally robust optimization. Conditioning on the observed side information, the portfolio manager solves an allocation problem that minimizes a worst-case conditional risk-return trade-off, subject to all possible perturbations of the covariate-return probability distribution within an optimal-transport ambiguity set. Despite the nonlinearity of the objective function in the probability measure, we show that the distributionally robust portfolio allocation problem with side information can be reformulated as a finite-dimensional optimization problem. If portfolio decisions are made based on either the mean-variance or the mean-Conditional-Value-at-Risk criterion, the resulting reformulation can be further simplified to a second-order or semidefinite cone program. An empirical study on the US stock market demonstrates the advantage of our integrated framework over other benchmarks.
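The paper's exact reformulations are cone programs; as a rough illustration of the general phenomenon that Wasserstein-type robustness often surfaces as norm regularization in simpler, unconditioned mean-variance models (a known result for those simpler models, not this paper's side-information formulation), here is a projected-subgradient sketch over the long-only simplex. Function names and parameters are ours.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto {w >= 0, sum(w) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u > (css - 1) / idx)[0][-1]
    theta = (css[rho] - 1) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def robust_mean_variance(mu_hat, Sigma, lam=1.0, radius=0.1, n_iter=500, step=0.05):
    """Sketch: min_w  -mu'w + lam * w'Sigma w + radius * ||w||_2  over the simplex,
    where the norm term plays the role of the ambiguity-set radius."""
    w = np.full(len(mu_hat), 1.0 / len(mu_hat))
    for _ in range(n_iter):
        g = -mu_hat + 2 * lam * Sigma @ w + radius * w / max(np.linalg.norm(w), 1e-12)
        w = project_simplex(w - step * g)
    return w
```

Larger `radius` shrinks the allocation toward more diversified (smaller-norm) portfolios, which is the qualitative effect robustness has in this family of models.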